# Japanese CLIP
## llm-jp/llm-jp-clip-vit-large-patch14
Apache-2.0 · Text-to-Image · Japanese · llm-jp · 254 downloads · 1 like

A Japanese CLIP model trained with the OpenCLIP framework on a dataset of 1.45 billion Japanese image-text pairs. It supports zero-shot image classification and image-text retrieval.
## llm-jp/llm-jp-clip-vit-base-patch16
Apache-2.0 · Text-to-Image · Japanese · llm-jp · 40 downloads · 1 like

A Japanese CLIP model trained with the OpenCLIP framework, supporting zero-shot image classification.
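Since both llm-jp checkpoints were trained with the OpenCLIP framework, a minimal zero-shot classification sketch with the `open_clip` library is shown below. The `hf-hub:` model ID, the image path, and the candidate labels are assumptions for illustration; check each model card for the authoritative loading instructions and the required Japanese tokenizer.

```python
import torch
import open_clip
from PIL import Image

# Assumed Hub ID; "hf-hub:" loading is a plausible path for OpenCLIP-trained
# checkpoints published on the Hugging Face Hub (see the model card).
model_id = "hf-hub:llm-jp/llm-jp-clip-vit-base-patch16"

model, _, preprocess = open_clip.create_model_and_transforms(model_id)
tokenizer = open_clip.get_tokenizer(model_id)
model.eval()

# One query image and a few Japanese candidate labels ("dog", "cat", "bird").
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
labels = ["犬", "猫", "鳥"]
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product becomes cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The softmax over scaled cosine similarities turns the per-label scores into a probability distribution, which is the standard CLIP zero-shot classification recipe.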
## line-corporation/clip-japanese-base
Apache-2.0 · Text-to-Image · Transformers · Japanese · line-corporation · 14.31k downloads · 22 likes

A Japanese CLIP model developed by LY Corporation, trained on approximately 1 billion web-collected image-text pairs and suitable for a variety of vision tasks such as zero-shot classification and image-text retrieval.
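Because clip-japanese-base is distributed as a Transformers model with custom code, an image-text retrieval sketch along the following lines is plausible. The use of `trust_remote_code=True`, the `get_image_features`/`get_text_features` calls, and the image path and captions are assumptions based on the usual CLIP-style interface; consult the model card for the authoritative usage example.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel, AutoTokenizer

# Assumed loading path: the repository ships custom model code, so
# trust_remote_code=True is expected to be required.
model_id = "line-corporation/clip-japanese-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
processor = AutoImageProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()

# One query image, several Japanese candidate captions (text-image retrieval).
image_inputs = processor(Image.open("example.jpg"), return_tensors="pt")
captions = ["公園で遊ぶ犬", "ソファで眠る猫", "空を飛ぶ鳥"]
# Assumption: the bundled tokenizer returns tensors usable by the text encoder.
text_inputs = tokenizer(captions)

with torch.no_grad():
    # Assumes CLIP-style feature heads exposed by the custom model code.
    image_features = model.get_image_features(image_inputs["pixel_values"])
    text_features = model.get_text_features(**text_inputs)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    scores = (image_features @ text_features.T).softmax(dim=-1)

best = scores[0].argmax().item()
print("best matching caption:", captions[best])
```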